Super LiDAR Reflectance for Robotic Perception

Gao, Wei, Zhang, Jie, Zhao, Mingle, Zhang, Zhiyuan, Kong, Shu, Ghaffari, Maani, Song, Dezhen, Xu, Cheng-Zhong, Kong, Hui

arXiv.org Artificial Intelligence

Conventionally, human intuition defines vision as a modality of passive optical sensing, while active optical sensing is typically regarded as measurement rather than a default modality of vision. This situation is now changing: sensor technologies and data-driven paradigms empower active optical sensing to redefine the boundaries of vision, ushering in a new era of active vision. Light Detection and Ranging (LiDAR) sensors capture reflectance from object surfaces, which remains invariant under varying illumination conditions, showing significant potential in robotic perception tasks such as detection, recognition, segmentation, and Simultaneous Localization and Mapping (SLAM). These applications often rely on dense sensing capabilities, typically achieved by high-resolution, expensive LiDAR sensors. A key challenge with low-cost LiDARs lies in the sparsity of scan data, which limits their broader application. To address this limitation, this work introduces a framework for generating dense LiDAR reflectance images from sparse data, leveraging the unique attributes of non-repeating scanning LiDAR (NRS-LiDAR). We tackle critical challenges, including reflectance calibration and the transition from static to dynamic scene domains, enabling the reconstruction of dense reflectance images in real-world settings. The key contributions of this work include a comprehensive dataset for LiDAR reflectance image densification, a densification network tailored to NRS-LiDAR, and diverse applications, such as loop closure and traffic lane detection, using the generated dense reflectance images. Experimental results validate the efficacy of the proposed approach, which integrates computer vision techniques with LiDAR data processing, enhancing the applicability of low-cost LiDAR systems and establishing a novel paradigm for robotic active vision: LiDAR as a Camera. The dataset and code are available at: To Be Updated.
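As a rough illustration of the input representation behind "LiDAR as a Camera" (a minimal sketch of projecting sparse NRS-LiDAR returns onto a reflectance image grid, not the paper's densification network; the image resolution and field-of-view limits below are assumed values):

```python
import numpy as np

def accumulate_reflectance_image(points, reflectance, h=64, w=512,
                                 fov_up=15.0, fov_down=-15.0):
    """Project LiDAR returns onto a spherical image grid and average
    the reflectance of all points falling into each cell. Cells with
    no returns stay zero, which is the sparsity the paper densifies."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    yaw = np.arctan2(y, x)                      # azimuth in [-pi, pi]
    pitch = np.arcsin(np.clip(z / r, -1, 1))    # elevation
    # Normalize angles to pixel coordinates.
    u = ((yaw + np.pi) / (2 * np.pi) * w).astype(int) % w
    fov = np.radians(fov_up) - np.radians(fov_down)
    v = ((np.radians(fov_up) - pitch) / fov * h).astype(int)
    valid = (v >= 0) & (v < h)
    img_sum = np.zeros((h, w))
    img_cnt = np.zeros((h, w))
    np.add.at(img_sum, (v[valid], u[valid]), reflectance[valid])
    np.add.at(img_cnt, (v[valid], u[valid]), 1)
    return np.divide(img_sum, img_cnt, out=np.zeros_like(img_sum),
                     where=img_cnt > 0)
```

Because an NRS-LiDAR's scan pattern does not repeat, accumulating successive scans into the same grid fills progressively more cells, which is what makes densification from such a sensor attractive.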


Automatic Illumination Spectrum Recovery

Habili, Nariman, Oorloff, Jeremy, Petersson, Lars

arXiv.org Artificial Intelligence

We develop a deep learning network to estimate the illumination spectrum of hyperspectral images under various lighting conditions. To this end, a dataset, IllumNet, was created. Images were captured using a Specim IQ camera under various illumination conditions, both indoor and outdoor. Outdoor images were captured in sunny, overcast, and shady conditions and at different times of the day. For indoor images, halogen and LED light sources were used, as well as mixed light sources, mainly halogen or LED combined with fluorescent. The ResNet18 network was employed in this study, but with the 2D kernels changed to 3D kernels to suit the spectral nature of the data. In addition to fitting the actual illumination spectrum, the predicted illumination spectrum should be smooth; this is achieved with a cubic smoothing spline error cost function. Experimental results indicate that the trained model can infer an accurate estimate of the illumination spectrum.
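The spline-based cost described above can be approximated with a discrete roughness penalty. The sketch below is a simplified stand-in, not the authors' exact loss: it combines a data-fit term with a second-difference (curvature) penalty, which is the discrete analogue of the cubic smoothing spline criterion; the weight `lam` is an assumed hyperparameter.

```python
import numpy as np

def smooth_spectrum_loss(pred, target, lam=0.1):
    """MSE between predicted and target illumination spectra plus a
    second-difference roughness penalty that discourages jagged
    predictions across neighboring wavelength bands."""
    mse = np.mean((pred - target) ** 2)
    d2 = pred[2:] - 2 * pred[1:-1] + pred[:-2]   # discrete curvature
    return mse + lam * np.mean(d2 ** 2)
```

A linear spectrum incurs no roughness penalty, so the loss reduces to pure MSE in that case; increasing `lam` trades fit accuracy for smoothness, exactly as the smoothing parameter does in a cubic smoothing spline.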


Recovering Intrinsic Images with a Global Sparsity Prior on Reflectance

Rother, Carsten, Kiefel, Martin, Zhang, Lumin, Schölkopf, Bernhard, Gehler, Peter V.

Neural Information Processing Systems

We address the challenging task of decoupling material properties from lighting properties given a single image. In the last two decades virtually all works have concentrated on exploiting edge information to address this problem. We take a different route by introducing a new prior on reflectance, that models reflectance values as being drawn from a sparse set of basis colors. This results in a Random Field model with global, latent variables (basis colors) and pixel-accurate output reflectance values. We show that without edge information high-quality results can be achieved, that are on par with methods exploiting this source of information. Finally, we present competitive results by integrating an additional edge model. We believe that our approach is a solid starting point for future development in this domain.
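The core modeling idea, reflectance drawn from a small set of basis colors, can be illustrated with a toy quantization. The paper infers the basis colors as global latent variables in a Random Field; the sketch below merely clusters pixel colors with k-means to show the representation, and `k`, `iters`, and `seed` are assumed parameters.

```python
import numpy as np

def quantize_reflectance(pixels, k=3, iters=20, seed=0):
    """Snap each pixel's color to one of k latent basis colors via
    plain k-means: a crude stand-in for the global sparsity prior."""
    rng = np.random.default_rng(seed)
    centers = pixels[rng.choice(len(pixels), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(pixels[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    return centers[labels], labels
```

The quantized output contains at most k distinct colors, mirroring the prior's assumption that real scenes contain far fewer distinct reflectances than distinct observed pixel colors.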


Recovering Intrinsic Images from a Single Image

Tappen, Marshall F., Freeman, William T., Adelson, Edward H.

Neural Information Processing Systems

We present an algorithm that uses multiple cues to recover shading and reflectance intrinsic images from a single image. Using both color information and a classifier trained to recognize gray-scale patterns, each image derivative is classified as being caused by shading or a change in the surface's reflectance. Generalized Belief Propagation is then used to propagate information from areas where the correct classification is clear to areas where it is ambiguous. We also show results on real images.
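The color cue in the first step can be sketched simply: pure shading scales all color channels equally, so a derivative where chromaticity changes is evidence of a reflectance change. The snippet below shows only this heuristic; the trained gray-scale classifier and the Generalized Belief Propagation stage are omitted, and the threshold is an assumed value.

```python
import numpy as np

def classify_derivatives(img, thresh=0.05):
    """Label each horizontal derivative of an RGB image (H, W, 3) as
    'reflectance' if chromaticity changes across the pixel boundary,
    else 'shading', since shading scales all channels equally."""
    eps = 1e-8
    chrom = img / (img.sum(axis=2, keepdims=True) + eps)  # r, g, b ratios
    dchrom = np.abs(np.diff(chrom, axis=1)).sum(axis=2)   # chromaticity change
    return np.where(dchrom > thresh, "reflectance", "shading")
```

For example, a red pixel next to a darker red pixel yields 'shading' (same chromaticity, different intensity), while a red pixel next to a green one yields 'reflectance'.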



Bayesian Model of Surface Perception

Freeman, William T., Viola, Paul A.

Neural Information Processing Systems

Image intensity variations can result from several different object surface effects, including shading from 3-dimensional relief of the object, or paint on the surface itself. An essential problem in vision, which people solve naturally, is to attribute the proper physical cause, e.g.
